
    Rational Krylov approximation of matrix functions: Numerical methods and optimal pole selection

    Matrix functions are a central topic of linear algebra, and problems of their numerical approximation appear increasingly often in scientific computing. We review various rational Krylov methods for the computation of large-scale matrix functions. Emphasis is put on the rational Arnoldi method and variants thereof, namely the extended Krylov subspace method and the shift-and-invert Arnoldi method, but we also discuss the nonorthogonal generalized Leja point (or PAIN) method. The issue of optimal pole selection for rational Krylov methods applied to the approximation of the resolvent, the exponential function, and functions of Markov type is treated in some detail.
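
    As a rough illustration of how such methods operate, the following Python sketch approximates exp(A)b with a shift-and-invert Arnoldi iteration: a single factorization of A - sigma*I is reused for every solve, and the exponential is evaluated only on the small projected matrix. The pole sigma, the dimension m, and the dense-matrix setting are illustrative choices, not recommendations from the paper.

    import numpy as np
    from scipy.linalg import expm, lu_factor, lu_solve

    def shift_invert_arnoldi_expm(A, b, sigma=-1.0, m=20):
        # Approximate exp(A) @ b: build an orthonormal basis of the Krylov space
        # generated by (A - sigma*I)^{-1} and b, project A onto it, and evaluate
        # the exponential only on the small projected matrix (dense A assumed).
        n = len(b)
        lu = lu_factor(A - sigma * np.eye(n))      # one factorization, reused for all solves
        V = np.zeros((n, m + 1))
        beta = np.linalg.norm(b)
        V[:, 0] = b / beta
        for j in range(m):
            w = lu_solve(lu, V[:, j])              # apply (A - sigma*I)^{-1}
            for i in range(j + 1):                 # modified Gram-Schmidt
                w -= (V[:, i] @ w) * V[:, i]
            h = np.linalg.norm(w)
            if h < 1e-14:                          # lucky breakdown: invariant subspace found
                m = j + 1
                break
            V[:, j + 1] = w / h
        Vm = V[:, :m]
        Am = Vm.T @ A @ Vm                         # projection of A (not of the shifted inverse)
        return beta * (Vm @ expm(Am)[:, 0])        # exp(A) b  ~=  beta * V_m exp(A_m) e_1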

    Convergence of linear barycentric rational interpolation for analytic functions

    Polynomial interpolation of analytic functions can be very accurate, depending on the distribution of the interpolation nodes. However, for equispaced nodes and similar distributions, these interpolants are badly conditioned and in some cases fail to converge even in exact arithmetic. Linear barycentric rational interpolation with the weights introduced by Floater and Hormann can be viewed as blended polynomial interpolation and often yields better approximations in such cases. This has been proven for differentiable functions and indicated by several experiments for analytic functions. So far, these rational interpolants have been used mainly with a constant parameter, usually denoted by d, the degree of the blended polynomials, which leads to small condition numbers but only algebraic convergence. With the help of logarithmic potential theory we derive asymptotic convergence results for analytic functions when this parameter varies with the number of nodes. Moreover, we present suggestions on how to choose d so as to observe fast and stable convergence, even for equispaced nodes, where stable geometric convergence is provably impossible. We demonstrate our results with several numerical examples.
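
    For concreteness, the following Python sketch shows the interpolation scheme under analysis: fh_weights computes the Floater-Hormann barycentric weights for a chosen blending degree d (nodes assumed increasing), and bary_eval evaluates the resulting rational interpolant in barycentric form. The function names and the Runge-function usage example are illustrative, not taken from the paper.

    import numpy as np

    def fh_weights(x, d):
        # Floater-Hormann barycentric weights for increasing nodes x and blending degree d.
        n = len(x) - 1
        w = np.zeros(n + 1)
        for k in range(n + 1):
            s = 0.0
            for i in range(max(0, k - d), min(k, n - d) + 1):
                p = 1.0
                for j in range(i, i + d + 1):
                    if j != k:
                        p /= abs(x[k] - x[j])
                s += p
            w[k] = (-1.0) ** (k - d) * s
        return w

    def bary_eval(x, fx, w, t):
        # Evaluate the barycentric rational interpolant with data fx at the points t.
        t = np.atleast_1d(np.asarray(t, dtype=float))
        num, den = np.zeros_like(t), np.zeros_like(t)
        exact = np.full(t.shape, -1)
        for k in range(len(x)):
            diff = t - x[k]
            hit = diff == 0.0
            exact[hit] = k
            diff[hit] = 1.0                        # placeholder, fixed after the loop
            num += w[k] * fx[k] / diff
            den += w[k] / diff
        r = num / den
        r[exact >= 0] = np.asarray(fx)[exact[exact >= 0]]
        return r

    # usage: Runge's function at 41 equispaced nodes with d = 4
    x = np.linspace(-5.0, 5.0, 41)
    r = bary_eval(x, 1.0 / (1.0 + x**2), fh_weights(x, 4), np.linspace(-5.0, 5.0, 1001))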

    A black-box rational Arnoldi variant for Cauchy-Stieltjes matrix functions

    Rational Arnoldi is a powerful method for approximating functions of large sparse matrices times a vector. The selection of asymptotically optimal parameters for this method is crucial for its fast convergence. We present and investigate a novel strategy for automated parameter selection when the function to be approximated is of Cauchy-Stieltjes (or Markov) type, such as the matrix square root or the logarithm. The performance of this approach is demonstrated by numerical examples involving symmetric and nonsymmetric matrices. These examples suggest that our black-box method performs at least as well as, and typically better than, the standard rational Arnoldi method with parameters manually optimized for a given matrix.
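
    The following Python sketch illustrates why Cauchy-Stieltjes functions pair naturally with pole-based methods: via the representation z^(-1/2) = (2/pi) * int_0^(pi/2) dtheta / (z cos^2(theta) + sin^2(theta)), the action A^(-1/2) b reduces to a handful of shifted solves. The fixed Gauss-Legendre rule used here is only a stand-in and is not the automated pole-selection strategy proposed in the paper.

    import numpy as np

    def inv_sqrt_apply(A, b, num_nodes=12):
        # y ~= A^{-1/2} b for symmetric positive definite A: discretize the
        # Stieltjes integral above by Gauss-Legendre; each node costs one
        # (shifted and scaled) linear solve.
        n = len(b)
        g, w = np.polynomial.legendre.leggauss(num_nodes)
        theta = 0.25 * np.pi * (g + 1.0)           # map [-1, 1] to [0, pi/2]
        w = 0.25 * np.pi * w
        y = np.zeros(n)
        for th, wt in zip(theta, w):
            y += wt * np.linalg.solve(np.cos(th)**2 * A + np.sin(th)**2 * np.eye(n), b)
        return (2.0 / np.pi) * y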

    Some observations on weighted GMRES

    We investigate the convergence of the weighted GMRES method for solving linear systems. Two different weighting variants are compared with unweighted GMRES for three model problems, giving a phenomenological explanation of cases where weighting improves convergence and of a case where weighting has no effect on convergence. We also present new alternative implementations of the weighted Arnoldi algorithm that may be favorable in terms of computational complexity, and we examine stability issues connected with these implementations. Two implementations of weighted GMRES are compared on a large number of examples. We find that weighted GMRES may outperform unweighted GMRES for some problems, but more often it is not competitive with other Krylov subspace methods such as GMRES with deflated restarting or BiCGStab, in particular when a preconditioner is used.
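
    A compact way to experiment with the weighting idea is the equivalence between GMRES in the D-inner product and standard GMRES applied to a diagonally scaled system, as in the Python sketch below (an illustrative wrapper around SciPy's gmres, not one of the Arnoldi-level implementations compared in the paper).

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def weighted_gmres(A, b, d, **kwargs):
        # Weighted GMRES via a change of inner product: minimizing the residual
        # in the D-norm, D = diag(d) with d > 0, is equivalent to running
        # standard GMRES on  D^{1/2} A D^{-1/2} y = D^{1/2} b  with x = D^{-1/2} y.
        s = np.sqrt(d)
        n = len(b)
        Aop = LinearOperator((n, n), matvec=lambda y: s * (A @ (y / s)), dtype=float)
        y, info = gmres(Aop, s * b, **kwargs)
        return y / s, info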

    Robust Padé approximation via SVD

    Padé approximation is considered from the point of view of robust methods of numerical linear algebra, in particular the singular value decomposition. This leads to an algorithm for practical computation that bypasses most problems caused by the near-singularity of the underlying linear systems and by spurious pole-zero pairs introduced by rounding errors; a MATLAB code is provided. The success of this algorithm suggests that there might be variants of Padé approximation that are pointwise convergent as the degrees of the numerator and denominator increase to infinity, unlike traditional Padé approximants, which converge only in measure or in capacity.
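
    The linear-algebra core of such an approach can be sketched in a few lines of Python: the denominator is obtained as the right singular vector belonging to the smallest singular value of the Toeplitz block of Taylor coefficients, and the numerator follows by convolution. The robust degree reduction based on the numerical rank, which is the main point of the algorithm, is deliberately omitted from this sketch.

    import numpy as np

    def pade_svd(c, m, n):
        # Type-(m, n) Pade approximant p/q of a series with Taylor coefficients
        # c[0], ..., c[m+n]; p and q are returned as coefficient vectors.
        c = np.asarray(c, dtype=float)
        coef = lambda j: c[j] if 0 <= j <= m + n else 0.0
        C = np.array([[coef(m + 1 + i - k) for k in range(n + 1)] for i in range(n)])
        _, _, Vt = np.linalg.svd(C)                 # C q = 0 in the least-squares sense
        q = Vt[-1]                                  # denominator q_0, ..., q_n
        p = np.array([sum(coef(j - k) * q[k] for k in range(min(j, n) + 1))
                      for j in range(m + 1)])       # numerator from the convolution c*q
        return p, q

    # usage: the (2, 2) Pade approximant of exp(x) from its Taylor coefficients
    p, q = pade_svd([1, 1, 1/2, 1/6, 1/24], 2, 2)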

    Zolotarev Quadrature Rules and Load Balancing for the FEAST Eigensolver

    The FEAST method for solving large sparse eigenproblems is equivalent to subspace iteration with an approximate spectral projector and implicit orthogonalization. This relation allows us to characterize the convergence of the method in terms of the error of a certain rational approximant to an indicator function. We propose improved rational approximants that lead to FEAST variants with faster convergence, in particular rational approximants based on the work of Zolotarev. Numerical experiments demonstrate the possible computational savings, especially for pencils whose eigenvalues are not well separated and when the dimension of the search space is only slightly larger than the number of wanted eigenvalues. The new approach improves both convergence robustness and load balancing when FEAST runs on multiple search intervals in parallel.
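
    The Python sketch below shows the basic FEAST mechanism the paper builds on: a contour quadrature turns the spectral projector into a short sum of shifted solves, and Rayleigh-Ritz is applied to the filtered block. The plain trapezoid rule used here is a placeholder for the improved (e.g. Zolotarev-based) rational filters proposed in the paper; all names and parameters are illustrative.

    import numpy as np

    def feast_sketch(A, center, radius, k, num_quad=8, iters=4):
        # Subspace iteration with an approximate spectral projector onto the
        # eigenvalues inside the disc |z - center| < radius (standard eigenproblem).
        n = A.shape[0]
        X = np.random.default_rng(0).standard_normal((n, k))
        theta = 2.0 * np.pi * (np.arange(num_quad) + 0.5) / num_quad
        z = center + radius * np.exp(1j * theta)    # quadrature nodes on the circle
        w = radius * np.exp(1j * theta) / num_quad  # weights from dz / (2*pi*i)
        for _ in range(iters):
            Y = np.zeros((n, k), dtype=complex)
            for zj, wj in zip(z, w):                # one shifted solve per node
                Y += wj * np.linalg.solve(zj * np.eye(n) - A, X)
            Q, _ = np.linalg.qr(Y)                  # orthonormalize the filtered block
            evals, S = np.linalg.eig(Q.conj().T @ A @ Q)
            X = Q @ S                               # Ritz vectors feed the next sweep
        return evals, X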

    Efficient high-order rational integration and deferred correction with equispaced data

    Stable high-order linear interpolation schemes are well suited for the accurate approximation of antiderivatives and the construction of efficient quadrature rules. In this paper we utilize for this purpose the family of linear barycentric rational interpolants by Floater and Hormann, which are particularly useful for interpolation at equispaced nodes. We analyze the convergence of integrals of these interpolants to those of analytic functions as well as of functions with a finite number of continuous derivatives. As a by-product, our convergence analysis leads to an extrapolation scheme for rational quadrature at equispaced nodes. Furthermore, as a main application of our analysis, we present and investigate a new iterated deferred correction method for the solution of initial value problems, which allows one to work efficiently even with large numbers of equispaced data. This so-called rational deferred correction (RDC) method turns out to be highly competitive with other methods that rely on more involved implementations or on non-equispaced node distributions. Extensive numerical experiments are carried out, comparing the RDC method to the well-established spectral deferred correction (SDC) method of Dutt, Greengard and Rokhlin.
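
    As a small illustration of the quadrature building block (not of the full RDC method), the Python sketch below derives quadrature weights at equispaced nodes by integrating the Floater-Hormann interpolant; for equispaced nodes the barycentric weights reduce to signed sums of binomial coefficients. The dense Gauss-Legendre integration of the rational cardinal functions is a simple stand-in for the integration schemes used in practice.

    import numpy as np
    from math import comb

    def fh_equispaced_quadrature(n, d=3, gauss_pts=200):
        # Quadrature weights q_0, ..., q_n on [0, 1] such that sum_k q_k f(x_k)
        # approximates int_0^1 f, obtained by integrating the Floater-Hormann
        # interpolant of blending degree d at the n+1 equispaced nodes x_k.
        x = np.linspace(0.0, 1.0, n + 1)
        w = np.array([(-1.0) ** k * sum(comb(d, k - i)
                      for i in range(max(0, k - d), min(k, n - d) + 1))
                      for k in range(n + 1)])       # equispaced FH weights
        g, gw = np.polynomial.legendre.leggauss(gauss_pts)
        t, gw = 0.5 * (g + 1.0), 0.5 * gw           # Gauss rule mapped to [0, 1]
        diff = t[:, None] - x[None, :]
        basis = (w / diff) / np.sum(w / diff, axis=1, keepdims=True)
        return x, basis.T @ gw

    # usage: integrate exp on [0, 1] from 21 equispaced samples
    x, q = fh_equispaced_quadrature(20, d=3)
    print(q @ np.exp(x), np.e - 1.0)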
